Words near each other
・ Percival Ball
・ Percival Barnett
・ Percival Bazeley
・ Perceptions (EP)
・ Perceptions (magazine)
・ Perceptions (This Beautiful Republic album)
・ Perceptions Album
・ Perceptions of Mahmoud Ahmadinejad
・ Perceptions of Pacha
・ Perceptions of religious imagery in natural phenomena
・ PerceptIS
・ Perceptive Pixel
・ Perceptive Software
・ Perceptor
・ Perceptron
Perceptrons (book)
・ Perceptual (album)
・ Perceptual adaptation
・ Perceptual and Motor Skills
・ Perceptual art
・ Perceptual attack time
・ Perceptual audio coder
・ Perceptual computing
・ Perceptual control theory
・ Perceptual dialectology
・ Perceptual hashing
・ Perceptual learning
・ Perceptual load theory
・ Perceptual mapping
・ Perceptual MegaPixel



Perceptrons (book) : English Wikipedia edition
Perceptrons (book)

''Perceptrons: An Introduction to Computational Geometry'' is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1987, containing a chapter dedicated to countering the criticisms made of the book in the 1980s.
The main subject of the book is the perceptron, an important kind of artificial neural network developed in the late 1950s and early 1960s. The principal researcher on perceptrons was Frank Rosenblatt, author of the book ''Principles of Neurodynamics''. Rosenblatt and Minsky had known each other since adolescence, having studied one year apart at the Bronx High School of Science. At one point they became central figures in a debate within the AI research community, and they are known to have held heated discussions at conferences. Despite the dispute, the corrected version of the book, published after Rosenblatt's death, contains a dedication to him.
This book is at the center of a long-standing controversy in the study of artificial intelligence. It is claimed that the pessimistic predictions made by the authors were responsible for a misguided change in the direction of AI research, concentrating efforts on so-called "symbolic" systems and contributing to the so-called AI winter. This shift, supposedly, proved unfortunate in the 1980s, when new discoveries showed that the predictions in the book were wrong.
The book contains a number of mathematical proofs regarding perceptrons, and while it highlights some of perceptrons' strengths, it also shows some previously unknown limitations. The most important one relates to the computation of certain predicates, such as the XOR function, and also the important connectedness predicate. The problem of connectedness is illustrated on the awkwardly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate.〔Minsky-Papert 1972:74 shows the figures in black and white. The cover of the 1972 paperback edition has them printed purple on a red background, which makes the connectivity even more difficult to discern without using a finger or other means to follow the patterns mechanically. The problem is discussed in detail on pp. 136ff and indeed involves tracing the boundary.〕
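To make the connectedness predicate concrete, here is a minimal Python sketch (an illustration of the predicate itself, not anything taken from the book): it decides whether the black pixels of a small binary image form a single connected region by flood fill, the kind of sequential tracing the cover figures force the reader to perform by hand. The function name `is_connected` and the 4-neighbour convention are choices made for this example.
```python
# Illustrative sketch (not from the book): deciding the connectedness
# predicate by flood fill, which requires tracing through the whole image.
def is_connected(image):
    """True iff the 1-pixels of a 2D 0/1 grid form one 4-connected region."""
    rows, cols = len(image), len(image[0])
    ones = [(r, c) for r in range(rows) for c in range(cols) if image[r][c]]
    if not ones:
        return True  # no black pixels: vacuously connected
    seen, stack = {ones[0]}, [ones[0]]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and image[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return len(seen) == len(ones)

print(is_connected([[1, 1, 0],
                    [0, 1, 0],
                    [0, 1, 1]]))  # True: a single winding region
print(is_connected([[1, 0, 1],
                    [0, 0, 0],
                    [1, 0, 1]]))  # False: four isolated pixels
```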
== The XOR affair ==
Some critics of the book state that the authors imply that, since a single artificial neuron is incapable of implementing some functions, such as the XOR logical function, larger networks must suffer from similar limitations and should therefore be dropped. Later research on three-layered perceptrons showed how to implement such functions, thereby saving the technique from obliteration.
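Both halves of that account can be checked directly. The following Python sketch (weights chosen by hand for illustration; nothing here is taken from the book) brute-forces a grid of integer weights to confirm that no single linear threshold unit computes XOR, then wires two hidden threshold units into a small network that does.
```python
from itertools import product

def threshold(weights, bias, x):
    """Linear threshold unit: fires iff the weighted sum plus bias is > 0."""
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Part 1: no single unit computes XOR. A brute-force search over a grid
# of integer weights and biases finds no setting matching all four rows.
solvable = any(
    all(threshold((w1, w2), b, x) == y for x, y in XOR.items())
    for w1, w2, b in product(range(-5, 6), repeat=3)
)
print("single unit solves XOR:", solvable)  # False

# Part 2: one hidden layer suffices (hand-chosen weights).
def xor_net(x):
    h1 = threshold((1, 1), 0, x)   # OR:  fires when x1 + x2 > 0
    h2 = threshold((1, 1), -1, x)  # AND: fires when x1 + x2 > 1
    return threshold((1, -2), 0, (h1, h2))  # fires when h1 and not h2

print(all(xor_net(x) == y for x, y in XOR.items()))  # True
```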
There are many mistakes in this story. Although a single neuron can in fact compute only a small number of logical predicates, it was widely known that networks of such elements can compute any possible boolean function. This was known by Warren McCulloch and Walter Pitts, who even proposed how to create a Turing machine with their formal neurons; it is mentioned in Rosenblatt's book, and it is even mentioned in ''Perceptrons'' itself.〔Cf. Minsky-Papert (1972:232): "... a universal computer could be built entirely out of linear threshold modules. This does not in any sense reduce the theory of computation and programming to the theory of perceptrons."〕 Minsky also makes extensive use of formal neurons to create simple theoretical computers in his book ''Computation: Finite and Infinite Machines''.
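A minimal sketch of why boolean completeness was never in doubt (an illustrative construction in the McCulloch-Pitts spirit, not Minsky's own): AND, OR, and NOT are each a single linear threshold unit, so any truth table can be assembled from them, for example XOR in disjunctive normal form.
```python
def threshold(weights, bias, x):
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

# Each basic gate is a single McCulloch-Pitts threshold unit:
NOT_ = lambda a: threshold((-1,), 1, (a,))         # fires iff a == 0
AND_ = lambda a, b: threshold((1, 1), -1, (a, b))  # fires iff a + b > 1
OR_  = lambda a, b: threshold((1, 1), 0, (a, b))   # fires iff a + b > 0

# With AND, OR and NOT, any truth table can be written out in
# disjunctive normal form; XOR(a, b) = (a AND NOT b) OR (NOT a AND b).
def xor(a, b):
    return OR_(AND_(a, NOT_(b)), AND_(NOT_(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # prints the XOR truth table
```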
What the book does prove is that in three-layered feed-forward perceptrons (those with a so-called "hidden" or "intermediary" layer), some predicates cannot be computed unless at least one of the neurons in the first layer of computing neurons (the "intermediary" layer) is connected with a non-null weight to each and every input. This ran contrary to a hope held by some researchers of relying mostly on networks with a few layers of "local" neurons, each connected to only a small number of inputs. A feed-forward machine with "local" neurons is much easier to build and use than a large, fully connected neural network, so researchers at the time concentrated on these instead of on more complicated models.
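The classic counting construction for parity illustrates what such full connectivity looks like in practice (a standard textbook construction, assumed here for illustration rather than quoted from the book): every hidden unit of the network below carries a non-null weight to every input, exactly the connectivity the theorem says some predicates cannot do without.
```python
from itertools import product

def threshold(weights, bias, x):
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias > 0)

def parity_net(bits):
    """Two-layer threshold network computing the parity of n bits.

    Hidden unit k fires iff at least k inputs are on; note that every
    hidden unit carries weight 1 to every single input. The output unit
    alternates +1/-1 over these counters: h1 - h2 + h3 - ... equals 1
    exactly when the number of on-inputs is odd.
    """
    n = len(bits)
    hidden = [threshold((1,) * n, 1 - k, bits) for k in range(1, n + 1)]
    out_weights = tuple((-1) ** k for k in range(n))
    return threshold(out_weights, 0, hidden)

# Exhaustive check against true parity on all 4-bit inputs.
assert all(parity_net(b) == sum(b) % 2 for b in product((0, 1), repeat=4))
print("parity_net matches sum(bits) % 2 on every 4-bit input")
```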
Some other critics, most notably Jordan Pollack, note that what was a small proof concerning a global issue (parity) not being detectable by local detectors was interpreted by the community as a rather successful attempt to bury the whole idea. See Pollack, J. B. (1989). "No Harm Intended: A Review of the ''Perceptrons'' Expanded Edition". ''Journal of Mathematical Psychology'', 33(3), 358-365.

Excerpt source: Wikipedia, the free encyclopedia. Read the full article on 「Perceptrons (book)」 at Wikipedia.


